Applying suction grippers in unstructured environments is challenging because of depth and tilt errors in vision systems, which demand costly, elaborate sensing and control to compensate. To reduce these costs, suction grippers with compliant bodies or mechanisms have been proposed; however, their bulkiness and limited allowable error hinder their use in complex environments with large errors. Here, we propose a compact suction gripper that can pick objects over a wide range of distances and tilt angles without elaborate sensing and control. The spring-inserted gripper body deploys and conforms to distant and tilted objects until the suction cup completely seals against the object, then retracts immediately while holding the object. This seamless deployment and retraction is enabled by connecting the gripper body and suction cup to the same vacuum source, which couples vacuum picking with retraction of the gripper body. Experimental results validated that the proposed gripper can pick objects within 79 mm, which is 1.4 times its initial length, and at tilt angles up to 60°. The feasibility of the gripper was further verified through demonstrations, including picking objects of different heights from the same picking height and bin picking of transparent objects.
In this work, we present the Textless Vision-Language Transformer (TVLT), where homogeneous transformer blocks take raw visual and audio inputs for vision-and-language representation learning with minimal modality-specific design, without text-specific modules such as tokenization or automatic speech recognition (ASR). TVLT is trained by reconstructing masked patches of continuous video frames and audio spectrograms (masked autoencoding) and by contrastive modeling to align video and audio. TVLT attains performance comparable to its text-based counterpart on various multimodal tasks, such as visual question answering, image retrieval, video retrieval, and multimodal sentiment analysis, with 28x faster inference speed and only 1/3 of the parameters. Our findings suggest the possibility of learning compact and efficient visual-linguistic representations from low-level visual and audio signals without assuming the prior existence of text. Our code and checkpoints are available at: https://github.com/zinengtang/tvlt
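To make the training recipe concrete, here is a condensed PyTorch sketch of the two objectives, masked autoencoding plus video-audio contrastive alignment; the `encoder`/`decoder` modules, their interfaces, and the temperature are illustrative assumptions, not TVLT's actual code.

```python
import torch
import torch.nn.functional as F

def tvlt_losses(encoder, decoder, video_patches, audio_patches, mask_ratio=0.75):
    """video_patches, audio_patches: (B, N_v, D) and (B, N_a, D) patch embeddings."""
    # Masked autoencoding: drop a random subset of patches, reconstruct them.
    tokens = torch.cat([video_patches, audio_patches], dim=1)   # (B, N, D)
    n_keep = int(tokens.size(1) * (1 - mask_ratio))
    perm = torch.randperm(tokens.size(1))
    latent = encoder(tokens[:, perm[:n_keep]])                  # visible patches only
    recon = decoder(latent, n_total=tokens.size(1))             # assumed interface
    mae_loss = F.mse_loss(recon[:, perm[n_keep:]], tokens[:, perm[n_keep:]])

    # Contrastive modeling: matched video/audio clips get similar embeddings.
    v = F.normalize(encoder(video_patches).mean(dim=1), dim=-1)  # (B, D)
    a = F.normalize(encoder(audio_patches).mean(dim=1), dim=-1)
    logits = v @ a.t() / 0.07                                    # temperature
    labels = torch.arange(v.size(0))
    nce_loss = (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.t(), labels)) / 2
    return mae_loss + nce_loss
```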
Fine-tuning large pre-trained models on downstream tasks has recently been adopted in a variety of domains. However, updating the entire parameter set of large pre-trained models is costly. Although recently proposed parameter-efficient transfer learning (PETL) techniques allow updating a small subset of parameters (e.g., only 2% of the parameters) inside a pre-trained backbone network for a new task, they reduce the training memory requirement by at most 30%. This is because gradient computation for the trainable parameters still requires backpropagation through the large pre-trained backbone model. To address this, we propose Ladder Side-Tuning (LST), a new PETL technique that reduces training memory requirements substantially further. Unlike existing parameter-efficient methods that insert additional parameters inside backbone networks, we train a ladder side network, a small and separate network that takes intermediate activations as input via shortcut connections (ladders) from the backbone network and makes predictions. LST has significantly lower memory requirements than previous methods, because it does not require backpropagation through the backbone network, but only through the side network and ladder connections. We evaluate our method with various models (T5, CLIP-T5) on NLP (GLUE) and vision-and-language (VQA, GQA, NLVR2, MSCOCO) tasks. LST saves 69% of the memory cost of fine-tuning the whole network, whereas other methods save only 26% at similar parameter usage (hence, 2.7x more memory savings). Moreover, LST achieves higher accuracy than Adapter and LoRA in the low-memory regime. To further demonstrate the advantage of this better memory efficiency, we also apply LST to larger T5 models (T5-large, T5-3B), attaining better GLUE performance than full fine-tuning and other PETL methods. The same trend holds in our experiments on VL tasks.
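The memory saving comes from where gradients flow: the backbone runs without building an autograd graph and its activations are detached before entering the side network. The PyTorch sketch below illustrates that structure; the layer sizes, gating, and fusion rule are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LadderSideNetwork(nn.Module):
    def __init__(self, d_backbone=768, d_side=96, n_layers=12, n_classes=2):
        super().__init__()
        self.ladders = nn.ModuleList(
            [nn.Linear(d_backbone, d_side) for _ in range(n_layers)])
        self.blocks = nn.ModuleList(
            [nn.Linear(d_side, d_side) for _ in range(n_layers)])
        self.gates = nn.Parameter(torch.zeros(n_layers))  # learned fusion gates
        self.head = nn.Linear(d_side, n_classes)

    def forward(self, backbone_acts):
        # backbone_acts: list of detached per-layer activations, each (B, d_backbone)
        h = torch.zeros(backbone_acts[0].size(0), self.head.in_features)
        for i, act in enumerate(backbone_acts):
            g = torch.sigmoid(self.gates[i])
            h = torch.relu(self.blocks[i](g * h + (1 - g) * self.ladders[i](act)))
        return self.head(h)

# The frozen backbone runs without building a graph; only the side network
# and ladder connections participate in backpropagation.
with torch.no_grad():
    acts = [torch.randn(4, 768) for _ in range(12)]  # stand-in for backbone layers
logits = LadderSideNetwork()(acts)
logits.sum().backward()  # gradients touch only the side network
```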
Recently, there has been growing interest in building question answering (QA) systems that reason across multiple modalities, such as text and images. However, QA with images is typically limited to picking an answer from a predefined set of options. Moreover, images in the real world, especially in news, contain objects that are co-referential to the text, with complementary information coming from both modalities. In this paper, we present a new QA evaluation benchmark of 1,384 questions over news articles that require cross-media grounding of objects in images onto text. Specifically, the task involves multi-hop questions that require reasoning over image-caption pairs to identify the grounded visual object being referred to, and then predicting a span from the news body text to answer the question. In addition, we introduce a novel multimedia data augmentation framework, based on cross-media knowledge extraction and synthetic question-answer generation, to automatically produce data that provides weak supervision for this task. We evaluate both pipeline-based and end-to-end pretraining-based multimedia QA models on our benchmark, and show that they achieve promising performance while lagging considerably behind human performance, leaving large room for future work on this challenging new task.
Recently, fine-tuning language models pre-trained on large text corpora has provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks. However, fine-tuning the entire parameter set of pre-trained models becomes impractical as model sizes grow rapidly. Hence, in this paper, we introduce adapter-based parameter-efficient transfer learning techniques to V&L models such as VL-BART and VL-T5. We evaluate our methods in a unified multi-task setup on four diverse V&L tasks: VQAv2, GQA, NLVR2, and MSCOCO image captioning. With careful training and thorough experiments, we benchmark three popular adapter-based methods (Adapter, Hyperformer, Compacter) against standard full fine-tuning and the recently proposed prompt-tuning approach. We also enhance the efficiency and performance of adapters by sharing their weights to acquire knowledge across tasks. Our results demonstrate that training the adapter with the weight-sharing technique (4.4% of total parameters) can match the performance of fine-tuning the entire model. Finally, we present a comprehensive analysis, including the combination of adapters with task-specific prompts and the impact of V&L pre-training on adapters. Our code is available at: https://github.com/ylsung/vl_adapter
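For readers unfamiliar with the adapter family benchmarked here, the sketch below shows a basic bottleneck adapter with the cross-task weight sharing idea; the dimensions and sharing granularity are illustrative assumptions, not the exact VL-BART/VL-T5 configuration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck: down-project, nonlinearity, up-project."""
    def __init__(self, d_model=768, d_bottleneck=48):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# Cross-task weight sharing: all tasks reuse the same adapter per layer, so the
# trainable parameters stay a small fraction of the frozen backbone's.
shared_adapters = nn.ModuleList([Adapter() for _ in range(12)])

def layer_with_adapter(hidden, frozen_sublayer, layer_idx):
    """Insert the shared adapter after a frozen transformer sublayer."""
    return shared_adapters[layer_idx](frozen_sublayer(hidden))
```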
Given a large graph with few node labels, how can we (a) identify the mixed network-effect of the graph and (b) predict the unknown labels accurately and efficiently? This work proposes Network Effect Analysis (NEA) and UltraProp, which are based on two insights: (a) the network-effect (NE) insight: a graph can exhibit not only one of homophily and heterophily, but also both or none in a label-wise manner, and (b) the neighbor-differentiation (ND) insight: neighbors have different degrees of influence on the target node based on the strength of connections. NEA provides a statistical test to check whether a graph exhibits network-effect or not, and surprisingly discovers the absence of NE in many real-world graphs known to have heterophily. UltraProp solves the node classification problem with notable advantages: (a) Accurate, thanks to the network-effect (NE) and neighbor-differentiation (ND) insights; (b) Explainable, precisely estimating the compatibility matrix; (c) Scalable, being linear with the input size and handling graphs with millions of nodes; and (d) Principled, with a closed-form formula and theoretical guarantee. Applied to eight real-world graph datasets, UltraProp outperforms top competitors in terms of accuracy and run time, requiring only stock CPU servers. On a large real-world graph with 1.6M nodes and 22.3M edges, UltraProp achieves a more than 9x speedup (12 minutes vs. 2 hours) over most competitors.
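As a small illustration of compatibility-matrix-based propagation in this spirit (not the paper's exact algorithm), the sketch below propagates beliefs linearly through a sparse graph; the compatibility matrix H lets the same update express homophily (H near identity) or heterophily (large off-diagonals), label-wise.

```python
import numpy as np
import scipy.sparse as sp

def propagate(A, priors, H, n_iters=10, damping=0.5):
    """A: sparse (n, n) adjacency; priors: (n, k) initial beliefs (uniform rows
    for unlabeled nodes); H: (k, k) class-compatibility matrix."""
    B = priors.copy()
    for _ in range(n_iters):
        B = (1 - damping) * priors + damping * (A @ B @ H)  # linear update
        B /= B.sum(axis=1, keepdims=True)                   # renormalize rows
    return B

# Toy heterophilous graph: two connected nodes preferring opposite labels.
A = sp.csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))
H = np.array([[0.1, 0.9], [0.9, 0.1]])        # off-diagonal mass: heterophily
priors = np.array([[0.9, 0.1], [0.5, 0.5]])   # node 0 labeled, node 1 unknown
print(propagate(A, priors, H))                # node 1 drifts toward class 1
```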
In this paper, we propose a diffusion-based face swapping framework for the first time, called DiffFace, composed of training an ID-conditional DDPM, sampling with facial guidance, and a target-preserving blending strategy. Specifically, in the training process, the ID-conditional DDPM is trained to generate face images with the desired identity. In the sampling process, we use off-the-shelf facial expert models to make the model transfer the source identity while faithfully preserving the target attributes. During this process, to preserve the background of the target image and obtain the desired face swapping result, we additionally propose a target-preserving blending strategy. It helps our model keep the attributes of the target face from noise while transferring the source facial identity. In addition, without any re-training, our model can flexibly apply additional facial guidance and adaptively control the ID-attributes trade-off to achieve the desired results. To the best of our knowledge, this is the first approach to apply a diffusion model to the face swapping task. Compared with previous GAN-based approaches, by taking advantage of the diffusion model for the face swapping task, DiffFace achieves benefits such as training stability, high fidelity, diversity of samples, and controllability. Extensive experiments show that DiffFace is comparable or superior to state-of-the-art methods on several standard face swapping benchmarks.
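Schematically, the sampling process combines a guided reverse-diffusion step with mask-based blending, as in the hedged sketch below; `p_sample`/`q_sample`, the identity expert, and the guidance form are assumed interfaces for illustration, not DiffFace's actual implementation.

```python
import torch

def face_swap_sample(ddpm, id_expert, src_id_emb, target, face_mask,
                     timesteps, guidance_scale=1.0):
    x = torch.randn_like(target)
    for t in reversed(timesteps):
        # Facial guidance: nudge the sample toward the source identity using
        # the gradient of a face-recognition expert's similarity loss.
        x = x.detach().requires_grad_(True)
        id_loss = 1 - torch.cosine_similarity(
            id_expert(x), src_id_emb, dim=-1).mean()
        grad = torch.autograd.grad(id_loss, x)[0]
        with torch.no_grad():
            x = ddpm.p_sample(x, t) - guidance_scale * grad   # assumed API
            # Target-preserving blending: keep the (noised) target outside the
            # face mask so the background survives the reverse process.
            x = face_mask * x + (1 - face_mask) * ddpm.q_sample(target, t)
    return x
```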
Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, ICL performance does not scale well with the number of available training samples, as it is limited by the inherent input length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL that leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of steering language models toward more meaningful representations by tailoring the input into a form the model can better exploit. Through in-depth investigations on various datasets, we verify that PALP significantly enhances the input representations, closing the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in black-box scenarios.
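A minimal sketch of the PALP recipe, under illustrative assumptions (the model, prompt template, and mean pooling are ours, not the paper's exact setup): wrap each input in a task prompt, extract features from the frozen LM in a black-box manner, and train only a linear probe on top.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
lm = AutoModel.from_pretrained("gpt2").eval()

def embed(texts, template="Review: {x} Sentiment:"):
    """Prompt-augment each input, then mean-pool the frozen LM's hidden states."""
    prompts = [template.format(x=t) for t in texts]
    batch = tok(prompts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = lm(**batch).last_hidden_state       # (B, T, D)
    mask = batch["attention_mask"].unsqueeze(-1)     # ignore padding positions
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Only this lightweight probe is trained; the LM itself is never updated.
X_train = embed(["great movie", "terrible plot"])
probe = LogisticRegression().fit(X_train, [1, 0])
print(probe.predict(embed(["I loved it"])))
```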
Task-oriented dialogue (TOD) systems are mainly based on the slot-filling-based TOD (SF-TOD) framework, in which dialogues are broken down into smaller, controllable units (i.e., slots) to fulfill a specific task. A series of approaches based on this framework have achieved remarkable success on various TOD benchmarks. However, we argue that current TOD benchmarks are limited surrogates for real-world scenarios, and that current TOD models are still a long way from handling such scenarios. In this position paper, we first identify the current status and limitations of SF-TOD systems. We then explore the WebTOD framework, an alternative direction for building a scalable TOD system when a web/mobile interface is available. In WebTOD, the dialogue system learns to understand the web/mobile interface that the human agent interacts with, powered by a large-scale language model.
There is significant interest in deploying machine learning algorithms for diagnostic radiology, as modern learning techniques have made it possible to detect abnormalities in medical images within minutes. While machine-assisted diagnoses cannot yet reliably replace human review of images by a radiologist, they could inform prioritization rules for determining the order in which to review patient cases, so that patients with time-sensitive conditions could benefit from early intervention. We study this scenario by formulating it as a learning-augmented online scheduling problem. We are given information about each arriving patient's urgency level in advance, but these predictions are inevitably error-prone. In this formulation, we face the challenges of decision making under imperfect information and of responding dynamically to prediction error as we observe better data in real time. We propose a simple online policy and show that this policy is in fact the best possible in certain stylized settings. We also demonstrate that our policy achieves the two desiderata of online algorithms with predictions: consistency (performance improvement with prediction accuracy) and robustness (protection against the worst case). We complement our theoretical findings with empirical evaluations of the policy under settings that more accurately reflect clinical scenarios in the real world.
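The abstract does not spell out the proposed policy, so the toy sketch below only illustrates the general shape of such a prediction-guided scheduler: serve the waiting case with the highest predicted urgency, breaking ties by arrival time. Consistency comes from trusting accurate predictions; robustness safeguards (e.g., aging rules) would sit on top of this skeleton.

```python
import heapq

def schedule(arrivals):
    """arrivals: list of (arrival_time, predicted_urgency, case_id), sorted by
    arrival time; one unit-length review is completed per time step."""
    order, queue, t, i = [], [], 0, 0
    while i < len(arrivals) or queue:
        while i < len(arrivals) and arrivals[i][0] <= t:
            at, urgency, cid = arrivals[i]
            heapq.heappush(queue, (-urgency, at, cid))  # urgent first, FIFO ties
            i += 1
        if queue:
            order.append(heapq.heappop(queue)[2])
        t += 1
    return order

print(schedule([(0, 2, "A"), (0, 5, "B"), (1, 5, "C")]))  # -> ['B', 'C', 'A']
```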